
    Bayesian simulation optimization with input uncertainty

    We consider simulation optimization in the presence of input uncertainty. In particular, we assume that the input distribution can be described by some continuous parameters, and that we have some prior knowledge defining the probability distribution for these parameters. We then seek the simulation design that has the best expected performance over the possible parameters of the input distributions. Assuming correlation of performance between solutions and also between input distributions, we propose modifications of two well-known simulation optimization algorithms, Efficient Global Optimization and Knowledge Gradient with Continuous Parameters, so that they work efficiently under input uncertainty.
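The core objective in the abstract above — ranking designs by expected performance over a prior on the input-distribution parameters — can be sketched with a plain Monte Carlo estimate. The simulator, the Gaussian prior, and all names here are illustrative assumptions, not the paper's actual setup:

```python
import random

def simulate(x, theta):
    # Toy simulation output: a quadratic whose optimum shifts with the
    # input-distribution parameter theta (illustrative stand-in only).
    return -(x - theta) ** 2

def expected_performance(x, theta_prior_samples):
    # Monte Carlo estimate of a design's expected performance over the
    # prior on the input-distribution parameters.
    return sum(simulate(x, t) for t in theta_prior_samples) / len(theta_prior_samples)

random.seed(0)
# Assumed Gaussian prior over the input-distribution parameter.
prior = [random.gauss(1.0, 0.2) for _ in range(500)]
designs = [0.0, 0.5, 1.0, 1.5]
best = max(designs, key=lambda x: expected_performance(x, prior))
```

The paper's contribution is to avoid this brute-force averaging by exploiting correlation across solutions and input distributions inside EGO and Knowledge Gradient; the sketch only shows the quantity being optimised.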

    Continuous multi-task Bayesian optimisation with correlation

    This paper considers the problem of simultaneously identifying the optima for a (continuous or discrete) set of correlated tasks, where the performance of a particular input parameter on a particular task can only be estimated from (potentially noisy) samples. This has many applications, for example, identifying a stochastic algorithm’s optimal parameter settings for various tasks described by continuous feature values. We adapt the framework of Bayesian Optimisation to this problem. We propose a general multi-task optimisation framework and two myopic sampling procedures that determine task and parameter values for sampling, in order to efficiently find the best parameter setting for all tasks simultaneously. We show experimentally that our methods are much more efficient than collecting information randomly, and also more efficient than two other Bayesian multi-task optimisation algorithms from the literature.
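Exploiting correlation across tasks in Bayesian optimisation typically comes down to a joint covariance over (task, parameter) pairs, so that one sample informs nearby tasks. A minimal sketch of such a product kernel, with assumed RBF components and made-up lengthscales:

```python
import math

def rbf(a, b, lengthscale):
    # Squared-exponential similarity between two scalar values.
    return math.exp(-((a - b) ** 2) / (2 * lengthscale ** 2))

def joint_kernel(task1, x1, task2, x2, task_ls=1.0, x_ls=0.5):
    # Product kernel over (task, parameter) pairs: correlation across
    # tasks times correlation across parameter values. A sample at
    # (task1, x1) then carries information about nearby (task2, x2).
    return rbf(task1, task2, task_ls) * rbf(x1, x2, x_ls)
```

With such a kernel, a single Gaussian Process posterior covers all tasks at once, which is what makes myopic per-sample task/parameter selection possible.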

    Efficient information collection on portfolios

    This paper tackles the problem of efficiently collecting data to learn a classifier, or mapping, from each task to the best performing tool, where tasks are described by continuous features and there is a portfolio of tools to choose from. A typical example is selecting an optimization algorithm from a portfolio of algorithms, based on some features of the problem instance to be solved. Information is collected by testing a tool on a task and observing its (possibly stochastic) performance. The goal is to minimize the opportunity cost of the constructed mapping, where opportunity cost is the difference between the performance of the true best tool for each task, and the performance of the tool chosen by the constructed mapping, summed over all tasks. We propose several fully sequential information collection policies based on Bayesian statistics and Gaussian Process models. In each step, they myopically sample the (task, tool) pair that promises the highest value of the information collected. We prove optimality under certain conditions and empirically demonstrate that our methods significantly outperform standard approaches on a set of synthetic benchmark problems.
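The myopic step described above — sample the (task, tool) pair with the highest value of information — can be illustrated with a deliberately simplified scoring rule: pairs that are both uncertain and close to the current best for their task are the most informative. The scoring expression below is a hypothetical proxy, not the paper's actual value-of-information computation:

```python
def choose_pair(means, stds):
    # means[task][tool] / stds[task][tool]: posterior mean and std of each
    # tool's performance on each task. Score each pair by its uncertainty
    # minus its gap to the task's current best (assumed heuristic proxy
    # for value of information), and return the highest-scoring pair.
    best_pair, best_value = None, float("-inf")
    for task, tool_means in means.items():
        top = max(tool_means.values())
        for tool, mu in tool_means.items():
            value = stds[task][tool] - (top - mu)
            if value > best_value:
                best_pair, best_value = (task, tool), value
    return best_pair

means = {"taskA": {"t1": 0.90, "t2": 0.85}, "taskB": {"t1": 0.40, "t2": 0.80}}
stds  = {"taskA": {"t1": 0.05, "t2": 0.30}, "taskB": {"t1": 0.02, "t2": 0.03}}
pair = choose_pair(means, stds)
```

Here the near-tied but highly uncertain tool on taskA is selected, while the already-resolved taskB is left alone — the behaviour a value-of-information policy aims for.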

    Home healthcare routing and scheduling of multiple nurses in a dynamic environment

    Human resource planning in home healthcare is gaining importance day by day since companies in developed and developing countries face serious nurse and caregiver shortages. In the problem considered in this paper, the decision of patient assignment must be made immediately when the patient request arrives. Once patients have been accepted, they are serviced on the same days, at the same times, and by the same nurse during their episode of care. The objective is to maximise the number of patient visits for a set of nurses during the planning horizon. We propose a new heuristic based on generating several scenarios which include current schedules of nurses, the new request under consideration, as well as randomly generated future requests to solve three decision problems: first, do we accept the patient? If so, which nurse services the patient? Finally, to which days and times are the patient's weekly visits assigned? We compare our approach with a greedy heuristic from the literature by considering some real-life aspects such as clustered service areas and skill requirements, and empirically demonstrate that it achieves significantly higher average daily visits and shorter travel times compared to the greedy method.
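The scenario-generation idea above can be sketched as a simple accept/reject loop: simulate random future demand, test whether the new patient still fits some nurse's schedule in each scenario, and accept only if a majority of scenarios remain feasible. The capacity model, slot encoding, and threshold are all assumed simplifications of the paper's routing and skill constraints:

```python
import random

def fits(schedule, slot, capacity=5):
    # A nurse can take a weekly slot if that day is below capacity
    # (highly simplified stand-in for routing/skill feasibility checks).
    return schedule.count(slot) < capacity

def accept_request(nurse_schedules, new_slot, n_scenarios=20, threshold=0.5):
    # Scenario-based decision: simulate randomly generated future requests
    # alongside current schedules, and accept the new patient only if the
    # request remains feasible in at least `threshold` of the scenarios.
    feasible = 0
    for _ in range(n_scenarios):
        future = [random.randrange(5) for _ in range(3)]  # 3 random future requests
        for nurse, schedule in nurse_schedules.items():
            if fits(schedule + future, new_slot):
                feasible += 1
                break
    return feasible / n_scenarios >= threshold

random.seed(1)
schedules = {"nurse1": [0, 0, 1], "nurse2": [2, 3, 3]}
decision = accept_request(schedules, new_slot=4)
```

The same loop extends naturally to the paper's second and third decisions, by scoring each candidate (nurse, day, time) rather than a single slot.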

    Evolutionary algorithms for neural network design and training


    New sampling strategies when searching for robust solutions

    Many real-world optimisation problems involve uncertainties, and in such situations it is often desirable to identify robust solutions that perform well over the possible future scenarios. In this paper, we focus on input uncertainty, such as in manufacturing, where the actual manufactured product may differ from the specified design but should still function well. Estimating a solution’s expected fitness in such a case is challenging, especially if the fitness function is expensive to evaluate, and its analytic form is unknown. One option is to average over a number of scenarios, but this is computationally expensive. The archive sample approximation method reduces the required number of fitness evaluations by re-using previous evaluations stored in an archive. The main challenge in the application of this method lies in determining the locations of additional samples drawn in each generation to enrich the information in the archive and reduce the estimation error. In this paper, we use the Wasserstein distance metric to approximate the possible benefit of a potential sample location on the estimation error, and propose new sampling strategies based on this metric. Contrary to previous studies, we consider a sample’s contribution for the entire population, rather than inspecting each individual separately. This also allows us to dynamically adjust the number of samples to be collected in each generation. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches.
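In one dimension the Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values, which makes the "which sample location helps most" question easy to sketch. The archive, target, and candidate values below are invented for illustration; the paper's actual benefit approximation operates over the whole population:

```python
def wasserstein_1d(a, b):
    # Wasserstein-1 distance between two equal-size 1-D empirical samples:
    # mean absolute difference of the sorted values.
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

def best_new_sample(archive, target, candidates):
    # Pick the candidate location whose addition brings the archive's
    # empirical distribution closest (in Wasserstein distance) to the
    # target disturbance sample. Requires len(archive) + 1 == len(target).
    return min(candidates,
               key=lambda c: wasserstein_1d(archive + [c], target))

archive = [0.0, 0.5, 1.0]          # disturbances already evaluated
target = [0.0, 0.4, 0.8, 1.2]      # desired disturbance sample
candidates = [-1.0, 1.2, 3.0]
chosen = best_new_sample(archive, target, candidates)
```

The gap the metric measures is exactly the mismatch between the archived samples and the disturbance distribution one would ideally average over.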

    Adaptive control of sub-populations in evolutionary dynamic optimization

    Multi-population methods are highly effective in solving dynamic optimization problems. Three factors affect this significantly: the exclusion mechanisms to avoid the convergence to the same peak by multiple sub-populations, the resource allocation mechanism which assigns the computational resources to the sub-populations, and the control mechanisms to adaptively adjust the number of sub-populations by considering the number of optima and available computational resources. In the existing exclusion mechanisms, when the distance (i.e. the distance between their best found positions) between two sub-populations becomes less than a predefined threshold, the inferior one will be removed/reinitialized. However, this leads to the inability of algorithms to cover peaks/optima that are closer than the threshold. Moreover, despite the importance of resource allocation due to the limited available computational resources between environmental changes, it has not been well studied in the literature. Finally, the number of sub-populations should be adapted to the number of optima. However, in most existing adaptive multi-population methods, there is no predefined upper bound for generating sub-populations. Consequently, in problems with large numbers of peaks, they can generate too many sub-populations sharing limited computational resources. In this paper, a multi-population framework is proposed to address the aforementioned issues by using three adaptive approaches: sub-population generation, double-layer exclusion, and computational resource allocation. The experimental results demonstrate the superiority of the proposed framework over several peer approaches in solving various benchmark problems.
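The classic exclusion rule the abstract critiques is easy to state in code: whenever the best found positions of two sub-populations lie closer than a threshold, the one with the worse best fitness is removed. This sketch (with invented positions and fitnesses, maximisation assumed) shows exactly why close peaks get lost, which motivates the paper's double-layer exclusion:

```python
def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def apply_exclusion(subpops, threshold):
    # subpops: list of (best_position, best_fitness) per sub-population.
    # Classic single-layer exclusion: for any pair of sub-populations
    # whose best positions are closer than the threshold, drop the one
    # with the lower best fitness.
    removed = set()
    for i in range(len(subpops)):
        for j in range(i + 1, len(subpops)):
            if i in removed or j in removed:
                continue
            if euclidean(subpops[i][0], subpops[j][0]) < threshold:
                removed.add(i if subpops[i][1] < subpops[j][1] else j)
    return [sp for k, sp in enumerate(subpops) if k not in removed]

pops = [((0.0, 0.0), 5.0), ((0.1, 0.0), 7.0), ((2.0, 2.0), 3.0)]
survivors = apply_exclusion(pops, threshold=0.5)
```

If the two nearby positions had actually sat on two distinct peaks less than `threshold` apart, one of those peaks would now be uncovered — the failure mode the proposed framework addresses.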

    Dynamically accepting and scheduling patients for home healthcare

    The importance of home healthcare is growing rapidly since populations of developed and even developing countries are getting older and the number of hospitals, retirement homes, and medical staff does not increase at the same rate. We consider the Home Healthcare Nurse Scheduling Problem where patients arrive dynamically over time and acceptance and appointment time decisions have to be made as soon as patients arrive. The objective is to maximise the average number of daily visits for a single nurse. For the sake of service continuity, patients have to be visited on the same day and at the same time each week during their episode of care. We propose a new heuristic based on generating several scenarios which include randomly generated and actual requests in the schedule, scheduling new patients with a simple but fast heuristic, and analysing the results to decide whether to accept the new patient and at which appointment day/time. We compare our approach with two greedy heuristics from the literature, and empirically demonstrate that it achieves significantly better results compared to these other two methods.

    Evolving control rules for a dual-constrained job scheduling scenario

    Dispatching rules are often used for scheduling in semiconductor manufacturing due to the complexity and stochasticity of the problem. In the past, simulation-based Genetic Programming has been shown to be a powerful tool to automate the time-consuming and expensive process of designing such rules. However, the scheduling problems considered were usually only constrained by the capacity of the machines. In this paper, we extend this idea to dual-constrained flow shop scheduling, with machines and operators for loading and unloading to be scheduled simultaneously. We show empirically on a small test problem with parallel workstations, re-entrant flows and dynamic stochastic job arrivals that the approach is able to generate dispatching rules that perform significantly better than benchmark rules from the literature.
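A dispatching rule of the kind Genetic Programming evolves is, concretely, a priority expression over job attributes: whenever a machine (or operator) becomes free, the waiting job with the highest priority is dispatched. The attribute names and the slack-per-operation expression below are assumed for illustration, in the spirit of hand-written benchmark rules rather than any evolved rule from the paper:

```python
def priority(job, now):
    # Example priority expression over job attributes: jobs with less
    # slack per remaining operation are more urgent (assumed form,
    # analogous to hand-written benchmark dispatching rules).
    slack = job["due"] - now - job["remaining_work"]
    return -slack / max(job["ops_left"], 1)

queue = [
    {"id": "j1", "due": 20, "remaining_work": 5, "ops_left": 2},
    {"id": "j2", "due": 12, "remaining_work": 8, "ops_left": 4},
]
now = 4
# Dispatch the highest-priority waiting job.
next_job = max(queue, key=lambda j: priority(j, now))
```

GP searches over the space of such expressions inside a stochastic simulation; the dual-constrained extension evaluates candidate rules for machines and operators simultaneously.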